18 research outputs found

    Anti-Factor Is FPT Parameterized by Treewidth and List Size (But Counting Is Hard)

    In the general AntiFactor problem, a graph G and, for every vertex v of G, a set X_v ⊆ ℕ of forbidden degrees is given. The task is to find a set S of edges such that the degree of v in S is not in the set X_v. Standard techniques (dynamic programming plus fast convolution) can be used to show that if M is the largest forbidden degree, then the problem can be solved in time (M+2)^{tw} · n^{O(1)} if a tree decomposition of width tw is given. However, significantly faster algorithms are possible if the sets X_v are sparse: our main algorithmic result shows that if every vertex has at most x forbidden degrees (we call this special case AntiFactor_x), then the problem can be solved in time (x+1)^{O(tw)} · n^{O(1)}. That is, AntiFactor_x is fixed-parameter tractable parameterized by treewidth tw and the maximum number x of excluded degrees. Our algorithm uses the technique of representative sets, which can be generalized to the optimization version, but (as expected) not to the counting version of the problem. In fact, we show that #AntiFactor_1 is already #W[1]-hard parameterized by the width of the given decomposition. Moreover, we show that, unlike for the decision version, the standard dynamic programming algorithm is essentially optimal for the counting version. Formally, for a fixed nonempty set X, we denote by X-AntiFactor the special case where every vertex v has the same set X_v = X of forbidden degrees. We show the following lower bound for every fixed set X: if there is an ε > 0 such that #X-AntiFactor can be solved in time (max X + 2 - ε)^{tw} · n^{O(1)} given a tree decomposition of width tw, then the Counting Strong Exponential-Time Hypothesis (#SETH) fails.
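
    To make the problem definition concrete, here is a minimal Python sketch (not the treewidth-based algorithm from the paper) that brute-forces over edge subsets of a tiny graph and checks the AntiFactor condition; the function name anti_factor_brute_force and the toy instance are purely illustrative assumptions.

    from itertools import combinations

    def anti_factor_brute_force(vertices, edges, forbidden):
        # Return an edge set S such that, for every vertex v, the degree of v
        # in S is not in forbidden[v]; return None if no such set exists.
        # Exponential-time check of the AntiFactor definition only.
        for size in range(len(edges) + 1):
            for S in combinations(edges, size):
                degree = {v: 0 for v in vertices}
                for (u, v) in S:
                    degree[u] += 1
                    degree[v] += 1
                if all(degree[v] not in forbidden[v] for v in vertices):
                    return set(S)
        return None

    # Toy instance: a triangle; vertex 'a' must avoid degrees 0 and 1,
    # vertex 'b' must avoid degree 0, vertex 'c' has no forbidden degrees.
    vertices = ['a', 'b', 'c']
    edges = [('a', 'b'), ('b', 'c'), ('a', 'c')]
    forbidden = {'a': {0, 1}, 'b': {0}, 'c': set()}
    print(anti_factor_brute_force(vertices, edges, forbidden))
    # Expected: {('a', 'b'), ('a', 'c')}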

    Fine-Grained Complexity of Regular Expression Pattern Matching and Membership

    The currently fastest algorithm for regular expression pattern matching and membership improves the classical O(nm)-time algorithm by a factor of about log^{3/2} n. In this work, instead of focussing on general patterns, we analyse homogeneous patterns of bounded depth. For these, a classification splitting the pattern types into easy (strongly sub-quadratic) and hard (essentially quadratic time under SETH) is known. We take a very fine-grained look at the hard pattern types from this classification and show a dichotomy: a few types allow super-poly-logarithmic improvements, while the algorithms for the other pattern types can only be improved by a constant number of log factors, assuming the Formula-SAT Hypothesis.
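
    For orientation only, the sketch below shows a quadratic-time dynamic program for a heavily restricted pattern class (literal symbols, '.', and '*' applied to a single preceding symbol); it is an illustrative toy under those assumptions, not the general regular-expression matcher or any of the algorithms discussed in the abstract.

    def matches(text, pattern):
        # O(n*m) dynamic program for a restricted pattern class: literal
        # symbols, '.', and '*' applied to a single preceding symbol.
        # dp[i][j] is True iff text[i:] is matched by pattern[j:].
        n, m = len(text), len(pattern)
        dp = [[False] * (m + 1) for _ in range(n + 1)]
        dp[n][m] = True
        for i in range(n, -1, -1):
            for j in range(m - 1, -1, -1):
                first = i < n and pattern[j] in (text[i], '.')
                if j + 1 < m and pattern[j + 1] == '*':
                    # Either skip the starred symbol or consume one character.
                    dp[i][j] = dp[i][j + 2] or (first and dp[i + 1][j])
                else:
                    dp[i][j] = first and dp[i + 1][j + 1]
        return dp[0][0]

    print(matches("aab", "c*a*b"))    # True: zero 'c's, two 'a's, one 'b'
    print(matches("ab", ".*c"))       # False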

    Computing Generalized Convolutions Faster Than Brute Force

    Subcubic certificates for CFL reachability

    Many problems in interprocedural program analysis can be modeled as the context-free language (CFL) reachability problem on graphs and can be solved in cubic time. Despite years of effort, there are no known truly sub-cubic algorithms for this problem. We study the related certification task: given an instance of CFL reachability, are there small and efficiently checkable certificates for the existence and for the non-existence of a path? We show that, in both scenarios, there exist succinct certificates (of size O(n^2) in the size of the problem) and that these certificates can be checked in subcubic (matrix multiplication) time. The certificates are based on grammar-based compression of paths (for reachability) and on invariants represented as matrix inequalities (for non-reachability). Thus, CFL reachability lies in nondeterministic and co-nondeterministic subcubic time. A natural question is whether faster algorithms for CFL reachability would lead to faster algorithms for combinatorial problems such as Boolean satisfiability (SAT). As a consequence of our certification results, we show that there cannot be a fine-grained reduction from SAT to CFL reachability for a conditional lower bound stronger than n^ω, unless the nondeterministic strong exponential time hypothesis (NSETH) fails. In a nutshell, reductions from SAT are unlikely to explain the cubic bottleneck for CFL reachability. Our results extend to related subcubic-equivalent problems: pushdown reachability and 2NPDA recognition, as well as to all-pairs CFL reachability. For example, we describe succinct certificates for pushdown non-reachability (inductive invariants) and observe that they can be checked in matrix multiplication time. We also extract a new hardest 2NPDA language, capturing the “hard core” of all these problems.
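
    The following Python sketch illustrates the standard cubic saturation (worklist) algorithm for CFL reachability with the grammar in a Chomsky-like normal form; the function cfl_reachability, the rule encoding, and the Dyck-language toy instance are illustrative assumptions, not code from the paper.

    from collections import defaultdict, deque

    def cfl_reachability(labeled_edges, unary_rules, binary_rules, start):
        # Saturation ("worklist") algorithm for CFL reachability.
        #   unary_rules:  terminal label a -> set of nonterminals A with A -> a
        #   binary_rules: pair (B, C)      -> set of nonterminals A with A -> B C
        # A fact (A, u, v) means: some path from u to v spells a word
        # derivable from nonterminal A.
        facts = set()
        facts_from = defaultdict(set)   # u -> {(A, v)} for facts (A, u, v)
        facts_to = defaultdict(set)     # v -> {(A, u)} for facts (A, u, v)
        worklist = deque()

        def add(A, u, v):
            if (A, u, v) not in facts:
                facts.add((A, u, v))
                facts_from[u].add((A, v))
                facts_to[v].add((A, u))
                worklist.append((A, u, v))

        for (u, a, v) in labeled_edges:
            for A in unary_rules.get(a, ()):
                add(A, u, v)

        while worklist:
            B, u, w = worklist.popleft()
            # Combine to the right: A -> B C with a known fact (C, w, v).
            for (C, v) in list(facts_from[w]):
                for A in binary_rules.get((B, C), ()):
                    add(A, u, v)
            # Combine to the left: A -> C B with a known fact (C, x, u).
            for (C, x) in list(facts_to[u]):
                for A in binary_rules.get((C, B), ()):
                    add(A, x, w)

        return {(u, v) for (A, u, v) in facts if A == start}

    # Toy instance: Dyck-style matched brackets over a 5-node path.
    # Grammar (normalized): S -> L R | L T | S S, T -> S R, L -> '(', R -> ')'.
    edges = [(0, '(', 1), (1, '(', 2), (2, ')', 3), (3, ')', 4)]
    unary = {'(': {'L'}, ')': {'R'}}
    binary = {('L', 'R'): {'S'}, ('L', 'T'): {'S'},
              ('S', 'R'): {'T'}, ('S', 'S'): {'S'}}
    print(sorted(cfl_reachability(edges, unary, binary, 'S')))
    # Expected: [(0, 4), (1, 3)]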

    Tight Complexity Bounds for Counting Generalized Dominating Sets in Bounded-Treewidth Graphs

    We investigate how efficiently a well-studied family of domination-type problems can be solved on bounded-treewidth graphs. For sets σ, ρ of non-negative integers, a (σ,ρ)-set of a graph G is a set S of vertices such that |N(u) ∩ S| ∈ σ for every u ∈ S, and |N(v) ∩ S| ∈ ρ for every v ∉ S. The problem of finding a (σ,ρ)-set (of a certain size) unifies standard problems such as Independent Set, Dominating Set, Independent Dominating Set, and many others. For all pairs of finite or cofinite sets (σ,ρ), we determine (under standard complexity assumptions) the best possible value c_{σ,ρ} such that there is an algorithm that counts (σ,ρ)-sets in time c_{σ,ρ}^{tw} · n^{O(1)} (if a tree decomposition of width tw is given in the input). For example, for the Exact Independent Dominating Set problem (also known as Perfect Code), corresponding to σ = {0} and ρ = {1}, we improve the 3^{tw} · n^{O(1)} algorithm of [van Rooij, 2020] to 2^{tw} · n^{O(1)}. Despite the unusually delicate definition of c_{σ,ρ}, we show that our algorithms are most likely optimal, i.e., for any pair (σ,ρ) of finite or cofinite sets where the problem is non-trivial, and any ε > 0, a (c_{σ,ρ} - ε)^{tw} · n^{O(1)} algorithm counting the number of (σ,ρ)-sets would violate the Counting Strong Exponential-Time Hypothesis (#SETH). For finite sets σ and ρ, our lower bounds also extend to the decision version, showing that our algorithms are optimal in this setting as well. In contrast, for many cofinite sets, we show that further significant improvements for the decision and optimization versions are possible using the technique of representative sets.
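
    A minimal brute-force counter can clarify the (σ,ρ)-set definition; the sketch below (the function count_sigma_rho_sets and the 6-cycle example are illustrative assumptions) enumerates all vertex subsets of a small graph, which is exponential in the graph size and unrelated to the treewidth-based algorithms of the paper.

    from itertools import combinations

    def count_sigma_rho_sets(vertices, edges, sigma, rho):
        # Count (sigma, rho)-sets by brute force: S is a (sigma, rho)-set if
        # |N(u) & S| is in sigma for every u in S and |N(v) & S| is in rho
        # for every v not in S.  Exponential-time illustration only.
        neighbors = {v: set() for v in vertices}
        for (u, v) in edges:
            neighbors[u].add(v)
            neighbors[v].add(u)
        count = 0
        for size in range(len(vertices) + 1):
            for subset in combinations(vertices, size):
                S = set(subset)
                ok_in = all(len(neighbors[u] & S) in sigma for u in S)
                ok_out = all(len(neighbors[v] & S) in rho
                             for v in vertices if v not in S)
                if ok_in and ok_out:
                    count += 1
        return count

    # Perfect Code (sigma = {0}, rho = {1}) on the 6-cycle: exactly the three
    # sets {0, 3}, {1, 4}, {2, 5} qualify.
    cycle6 = [(i, (i + 1) % 6) for i in range(6)]
    print(count_sigma_rho_sets(list(range(6)), cycle6, sigma={0}, rho={1}))
    # Expected: 3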

    Parameterized Complexity of Weighted Multicut in Trees

    The Edge Multicut problem is a classical cut problem: given an undirected graph G, a set of pairs of vertices P, and a budget k, the goal is to determine if there is a set S of at most k edges such that, for each (s,t) ∈ P, the graph G - S has no path from s to t. Edge Multicut was relatively recently shown to be fixed-parameter tractable (FPT) parameterized by k, by Marx and Razgon [SICOMP 2014], and independently by Bousquet et al. [SICOMP 2018]. In the weighted version of the problem, called Weighted Edge Multicut, one is additionally given a weight function wt : E(G) → ℕ and a weight bound w, and the goal is to determine if there is a solution of size at most k and weight at most w. Both FPT algorithms for Edge Multicut, by Marx and Razgon and by Bousquet et al., fail to generalize to the weighted setting. In fact, the weighted problem is non-trivial even on trees, and determining whether Weighted Edge Multicut on trees is FPT was explicitly posed as an open problem by Bousquet et al. [STACS 2009]. In this article, we answer this question positively by designing an algorithm which uses a very recent result by Kim et al. [STOC 2022] about directed flow augmentation as a subroutine. We also study a variant of this problem where there is no bound on the size of the solution, but the parameter is a structural property of the input, for example, the number of leaves of the tree. We strengthen our results by stating them for the more general vertex deletion version.
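
    To illustrate the problem statement only (not the flow-augmentation-based FPT algorithm), here is a hedged brute-force sketch for Weighted Edge Multicut on a tiny instance; all names, including weighted_edge_multicut and the star-shaped toy tree, are illustrative assumptions.

    from itertools import combinations

    def weighted_edge_multicut(vertices, edges, wt, pairs, k, w):
        # Brute-force check: is there a set S of at most k edges with total
        # weight at most w whose removal separates every terminal pair?
        # Exponential-time illustration of the problem definition only.
        def separates(removed):
            parent = {v: v for v in vertices}
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            for (u, v) in edges:
                if (u, v) not in removed:
                    parent[find(u)] = find(v)
            return all(find(s) != find(t) for (s, t) in pairs)

        for size in range(k + 1):
            for S in combinations(edges, size):
                if sum(wt[e] for e in S) <= w and separates(set(S)):
                    return set(S)
        return None

    # Toy tree: a star with center 'c'; cutting off leaf 'a' separates both pairs.
    vertices = ['c', 'a', 'b', 'd']
    edges = [('c', 'a'), ('c', 'b'), ('c', 'd')]
    wt = {('c', 'a'): 2, ('c', 'b'): 1, ('c', 'd'): 5}
    pairs = [('a', 'b'), ('a', 'd')]
    print(weighted_edge_multicut(vertices, edges, wt, pairs, k=1, w=4))
    # Expected: {('c', 'a')}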

    Domination and Cut Problems on Chordal Graphs with Bounded Leafage

    The leafage of a chordal graph G is the minimum integer ℓ such that G can be realized as an intersection graph of subtrees of a tree with ℓ leaves. We consider structural parameterization by the leafage of classical domination and cut problems on chordal graphs. Fomin, Golovach, and Raymond [ESA 2018, Algorithmica 2020] proved, among other things, that Dominating Set on chordal graphs admits an algorithm running in time 2^{O(ℓ^2)} · n^{O(1)}. We present a conceptually much simpler algorithm that runs in time 2^{O(ℓ)} · n^{O(1)}. We extend our approach to obtain similar results for Connected Dominating Set and Steiner Tree. We then consider the two classical cut problems MultiCut with Undeletable Terminals and Multiway Cut with Undeletable Terminals. We prove that the former is W[1]-hard when parameterized by the leafage and complement this result by presenting a simple n^{O(ℓ)}-time algorithm. To our surprise, we find that, in contrast, Multiway Cut with Undeletable Terminals on chordal graphs can be solved in n^{O(1)} time.
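
    As a reference point for the problems being parameterized, the sketch below brute-forces a minimum dominating set on a small chordal graph (a path, which is an interval graph); the function minimum_dominating_set and the example are illustrative assumptions, not the 2^{O(ℓ)} · n^{O(1)} algorithm from the abstract.

    from itertools import combinations

    def minimum_dominating_set(vertices, edges):
        # Smallest set D such that every vertex is in D or adjacent to a
        # member of D, found by brute force over all vertex subsets.
        closed = {v: {v} for v in vertices}
        for (u, v) in edges:
            closed[u].add(v)
            closed[v].add(u)
        for size in range(len(vertices) + 1):
            for D in combinations(vertices, size):
                dominated = set().union(*(closed[v] for v in D))
                if dominated.issuperset(vertices):
                    return set(D)

    # A path on five vertices is an interval graph, hence chordal; one
    # minimum dominating set is {1, 4}.
    path = [(1, 2), (2, 3), (3, 4), (4, 5)]
    print(minimum_dominating_set([1, 2, 3, 4, 5], path))
    # Expected: {1, 4}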